Stuttgart


Robot Talk Episode 107 – Animal-inspired robot movement, with Robert Siddall

Robohub

Claire chatted to Robert Siddall from the University of Surrey about novel robot designs inspired by the way real animals move. Robert Siddall is an aerospace engineer with an enthusiasm for unconventional robotics. He is interested in understanding animal locomotion for the benefit of synthetic locomotion, particularly flight. Before becoming a Lecturer at the University of Surrey, he worked at the Max Planck Institute for Intelligent Systems in Stuttgart, Germany, where he studied the arboreal acrobatics of rainforest-dwelling reptiles. His work focuses on the design of novel robots that can tackle important environmental problems.


3D Gaussian Splatting aided Localization for Large and Complex Indoor-Environments

arXiv.org Artificial Intelligence

Recent breakthroughs in deep learning, including 3D Gaussian Splatting (3DGS) (Kerbl et al., 2024), have significantly advanced both the performance and visual quality of scene reconstruction. In our work, we focus on 3D mapping of complex, large-scale indoor environments such as construction sites and factory halls. This initiative is driven by a project within the Cluster of Excellence Integrative Computational Design and Construction for Architecture (IntCDC) at the University of Stuttgart, which aims to enable autonomous indoor construction for new or pre-existing buildings (IntCDC, 2024a). Typical construction tasks, including material handling and element assembly, require highly accurate mapping approaches to enable precise localization of both building components and the construction robots. Image-based localization methods are particularly valuable due to the widespread availability and low cost of cameras, which are now standard equipment on most modern robots.


Synergistic Traffic Assignment

arXiv.org Artificial Intelligence

Traffic assignment analyzes the traffic flows in road networks that emerge from traveler interaction. Traditionally, travelers are assumed to use private cars, so road costs grow with the number of users due to congestion. In sustainable transit systems, however, travelers share vehicles, such that more users on a road lead to higher sharing potential and reduced cost per user. Thus, we invert the usual avoidant traffic assignment (ATA) and instead consider synergistic traffic assignment (STA), where road costs decrease with use. We find that STA differs significantly from ATA from a game-theoretical point of view. We show that a simple iterative best-response method with simultaneous updates converges to an equilibrium state. This enables efficient computation of equilibria using optimized speedup techniques for shortest-path queries. In contrast, ATA requires slower sequential updates or more complicated iteration schemes that only approximate an equilibrium. Experiments with a realistic scenario for the city of Stuttgart indicate that STA indeed converges quickly to an equilibrium. We envision STA as part of software-defined transportation systems that dynamically adapt to current travel demand. As a first demonstration, we show that an STA equilibrium can be used to incorporate traveler synergism in a simple bus line planning algorithm, potentially reducing the required vehicle resources substantially.
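
To illustrate the core mechanism, the following is a minimal Python sketch of synergistic assignment via iterative best response with simultaneous updates: all travelers reroute against the same load vector, and iteration stops at a fixed point. The toy network, the demand, and the decreasing cost function are illustrative assumptions, not taken from the paper.

    import heapq
    from collections import Counter

    # Directed toy network: edge id -> (tail, head, base cost).
    EDGES = [("a", "b", 4.0), ("a", "c", 5.0), ("b", "d", 4.0), ("c", "d", 2.0)]
    GRAPH = {}
    for i, (u, v, _) in enumerate(EDGES):
        GRAPH.setdefault(u, []).append((v, i))

    def edge_cost(edge_id, load):
        # Synergistic: per-user cost decreases as more travelers share the road.
        return EDGES[edge_id][2] / (1.0 + 0.5 * load)

    def best_response(origin, dest, loads):
        # Dijkstra shortest path w.r.t. the costs induced by the current loads.
        dist, prev = {origin: 0.0}, {}
        pq = [(0.0, origin)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == dest:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v, eid in GRAPH.get(u, []):
                nd = d + edge_cost(eid, loads[eid])
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, (u, eid)
                    heapq.heappush(pq, (nd, v))
        path, node = [], dest
        while node != origin:
            node, eid = prev[node]
            path.append(eid)
        return tuple(reversed(path))

    travelers = [("a", "d")] * 10      # ten identical trips
    paths = [() for _ in travelers]    # start with empty routes (zero load)
    for iteration in range(50):
        loads = Counter(eid for p in paths for eid in p)
        # Simultaneous update: everyone responds to the same load vector.
        new_paths = [best_response(o, d, loads) for o, d in travelers]
        if new_paths == paths:          # fixed point reached: an equilibrium
            break
        paths = new_paths
    print(f"equilibrium after {iteration} iterations: {set(paths)}")

On this toy instance all travelers concentrate on one route, because under decreasing costs shared use is self-reinforcing rather than self-limiting as in congestion-based assignment.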


Improving Natural Language Processing Tasks with Human Gaze-Guided Neural Attention: Supplementary Material University of Stuttgart, Institute for Visualization and Interactive Systems (VIS), Germany

Neural Information Processing Systems

To gain further insight into the comparison between our model and the current state of the art in sentence compression, we show results of our method and its ablations alongside ablations of the method by Zhao et al. [4] (see Table 1). In their work, the authors added a "syntax-based language model" to their sentence compression network, with which they obtained the state-of-the-art performance of 85.1 F1. This language model is trained to learn the syntactic dependencies between lexical items in the given input sequence. Together with it, they use a reinforcement learning algorithm to improve the deletions proposed by their Bi-LSTM model. Using a naive language model without syntactic features, their model obtained an F1 score of 85.0.


Improving Natural Language Processing Tasks with Human Gaze-Guided Neural Attention University of Stuttgart, Institute for Visualization and Interactive Systems (VIS), Germany

Neural Information Processing Systems

A lack of corpora has so far limited advances in integrating human gaze data as a supervisory signal in neural attention mechanisms for natural language processing (NLP). We propose a novel hybrid text saliency model (TSM) that, for the first time, combines a cognitive model of reading with explicit human gaze supervision in a single machine learning framework. On four different corpora we demonstrate that the duration predictions of our hybrid TSM are highly correlated with human gaze ground truth. We further propose a novel joint modeling approach to integrate TSM predictions into the attention layer of a network designed for a specific upstream NLP task, without the need for any task-specific human gaze data. We demonstrate that our joint model outperforms the state of the art in paraphrase generation on the Quora Question Pairs corpus by more than 10% in BLEU-4 and achieves state-of-the-art performance for sentence compression on the challenging Google Sentence Compression corpus. As such, our work introduces a practical approach for bridging data-driven and cognitive models and demonstrates a new way to integrate human gaze-guided neural attention into NLP tasks.
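
One simple way to picture the integration is to bias attention logits with the predicted per-token reading durations. The PyTorch sketch below only illustrates that idea; the paper's joint model is trained end-to-end with the TSM, and the additive log-bias, shapes, and random inputs here are assumptions.

    import torch
    import torch.nn.functional as F

    def saliency_biased_attention(query, keys, values, saliency, eps=1e-6):
        # query: (d,), keys/values: (n, d), saliency: (n,) non-negative durations.
        logits = keys @ query / keys.shape[-1] ** 0.5   # scaled dot product
        logits = logits + torch.log(saliency + eps)     # bias toward salient tokens
        weights = F.softmax(logits, dim=-1)
        return weights @ values, weights

    torch.manual_seed(0)
    n, d = 6, 8
    q, K, V = torch.randn(d), torch.randn(n, d), torch.randn(n, d)
    sal = torch.rand(n)   # stand-in for the TSM's per-token duration predictions
    context, attn = saliency_biased_attention(q, K, V, sal)
    print(attn)           # attention mass is shifted toward high-saliency tokens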


Introduction to AI Planning

arXiv.org Artificial Intelligence

These are notes for lectures presented at the University of Stuttgart that provide an introduction to key concepts and techniques in AI Planning. Artificial Intelligence Planning, also known as Automated Planning, emerged around 1966 from the need to give autonomy to a wheeled robot. Since then, it has evolved into a flourishing research and development discipline, often associated with scheduling. Over the decades, various approaches to planning have been developed, with characteristics that make them appropriate for specific tasks and applications. Most approaches represent the world as a state within a state transition system; the planning problem then becomes that of searching for a path in the state space from the current state to one that satisfies the goals of the user. The notes begin by introducing the state model, move on to classical planning, the foundational form of planning, and present fundamental algorithms for solving such problems. Subsequently, we examine planning as a constraint satisfaction problem, outlining the mapping process and describing an approach to solving such problems. The most extensive section is dedicated to Hierarchical Task Network (HTN) planning, one of the most widely used and powerful planning techniques in the field. The lecture notes end with a bonus chapter on the Planning Domain Definition Language (PDDL), the de facto standard syntax for representing non-hierarchical planning problems.
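
As a minimal illustration of planning as state-space search, the following Python sketch encodes STRIPS-style actions (preconditions, add list, delete list) over sets of facts and finds a shortest plan by breadth-first search. The toy domain and fact names are invented for illustration and are not from the lecture notes.

    from collections import deque

    # An action: (name, preconditions, add list, delete list), all sets of facts.
    ACTIONS = [
        ("pick", {"hand-empty", "on-table"}, {"holding"}, {"hand-empty", "on-table"}),
        ("drop", {"holding"}, {"hand-empty", "on-table"}, {"holding"}),
        ("stack", {"holding"}, {"hand-empty", "on-block"}, {"holding"}),
    ]

    def successors(state):
        for name, pre, add, delete in ACTIONS:
            if pre <= state:                      # preconditions satisfied
                yield name, frozenset((state - delete) | add)

    def plan(initial, goal):
        # BFS over the state transition system; returns a shortest plan.
        initial = frozenset(initial)
        frontier = deque([(initial, [])])
        seen = {initial}
        while frontier:
            state, path = frontier.popleft()
            if goal <= state:
                return path
            for name, nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
        return None                               # goal unreachable

    print(plan({"hand-empty", "on-table"}, {"on-block"}))  # -> ['pick', 'stack']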


The Overcooked Generalisation Challenge

arXiv.org Artificial Intelligence

We introduce the Overcooked Generalisation Challenge (OGC) - the first benchmark to study agents' zero-shot cooperation abilities when faced with novel partners and levels in the Overcooked-AI environment. This perspective contrasts starkly with a large body of previous work that has trained and evaluated cooperating agents only on the same level, failing to capture the generalisation abilities required for real-world human-AI cooperation. Our challenge interfaces with state-of-the-art dual curriculum design (DCD) methods to generate auto-curricula for training general agents in Overcooked. It is the first cooperative multi-agent environment specifically designed for DCD methods and, consequently, the first to be benchmarked with state-of-the-art methods. It is fully GPU-accelerated, built on the DCD benchmark suite minimax, and freely available under an open-source license: https://git.hcics.simtech.uni-stuttgart.de/public-projects/OGC. We show that current DCD algorithms struggle to produce useful policies in this novel challenge, even when combined with recent network architectures designed for scalability and generalisability. The OGC pushes the boundaries of real-world human-AI cooperation by enabling the research community to study the impact of generalisation on cooperating agents.
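
The toy Python sketch below illustrates the regret-driven level selection idea underlying DCD methods: levels on which the student's return falls furthest short of an estimated optimum are replayed more often. The loop, the regret placeholder, and the exploration rate are invented for illustration and are not the benchmark's API.

    import random

    random.seed(0)
    levels = {f"level-{i}": 0.0 for i in range(5)}   # level id -> regret score

    def estimated_regret(level):
        # Placeholder: optimal return minus the student's achieved return.
        return random.random()

    for step in range(100):
        if random.random() < 0.5 or max(levels.values()) == 0.0:
            level = random.choice(list(levels))      # explore: sample a level
        else:
            level = max(levels, key=levels.get)      # replay: highest regret
        # ... train the student agent on `level` here ...
        levels[level] = estimated_regret(level)

    print(sorted(levels.items(), key=lambda kv: -kv[1]))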


Can Factual Statements be Deceptive? The DeFaBel Corpus of Belief-based Deception

arXiv.org Artificial Intelligence

If a person firmly believes in a non-factual statement, such as "The Earth is flat", and argues in its favor, there is no inherent intention to deceive. As the argumentation stems from genuine belief, it may be unlikely to exhibit the linguistic properties associated with deception or lying. This interplay of factuality, personal belief, and intent to deceive remains an understudied area. Disentangling the influence of these variables in argumentation is crucial to gain a better understanding of the linguistic properties attributed to each of them. To study the relation between deception and factuality, based on belief, we present the DeFaBel corpus, a crowd-sourced resource of belief-based deception. To create this corpus, we devise a study in which participants are instructed to write arguments supporting statements like "eating watermelon seeds can cause indigestion", regardless of the statement's factual accuracy or their personal beliefs about it. In addition to the generation task, we ask them to disclose their belief about the statement. The collected instances are labelled as deceptive if the arguments contradict the participants' personal beliefs. Each instance in the corpus is thus annotated (or implicitly labelled) with the personal belief of the author, the factuality of the statement, and the intended deceptiveness. The DeFaBel corpus contains 1031 German texts, of which 643 are deceptive and 388 are non-deceptive. It is the first publicly available corpus for studying deception in German. In our analysis, we find that people are more confident in the persuasiveness of their arguments when the statement aligns with their belief, but, surprisingly, less confident when generating arguments in favor of facts. The DeFaBel corpus can be obtained from https://www.ims.uni-stuttgart.de/data/defabel
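
The labelling rule described above can be stated compactly. The following Python sketch is a hypothetical restatement with an invented field name, assuming a binary belief judgment:

    def label_instance(believes_statement: bool) -> str:
        # Every participant argues in favor of the statement, so an argument
        # is deceptive iff it contradicts the author's disclosed belief.
        return "non-deceptive" if believes_statement else "deceptive"

    print(label_instance(believes_statement=False))  # -> "deceptive"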


What Makes Medical Claims (Un)Verifiable? Analyzing Entity and Relation Properties for Fact Verification

arXiv.org Artificial Intelligence

Biomedical claim verification fails if no evidence can be discovered. In these cases, the fact-checking verdict remains unknown and the claim is unverifiable. To improve upon this, we have to understand whether there are claim properties that impact verifiability. In this work, we assume that entities and relations define the core variables in a biomedical claim's anatomy and analyze whether their properties help us to differentiate verifiable from unverifiable claims. In a study with trained annotation experts, we prompt them to find evidence for biomedical claims and observe how they refine their search queries during the evidence search. This leads to the first corpus for scientific fact verification annotated with subject-relation-object triplets, evidence documents, and fact-checking verdicts (the BEAR-Fact corpus). We find (1) that discovering evidence for negated claims (e.g., X-does-not-cause-Y) is particularly challenging. Further, we see that annotators process queries mostly by adding constraints to the search and by normalizing entities to canonical names. (2) We compare our in-house annotations with a small crowdsourcing setting in which we employ medical experts and laypeople. We find that domain expertise does not have a substantial effect on the reliability of annotations. Finally, (3), we demonstrate that it is possible to reliably estimate the success of evidence retrieval purely from the claim text (.82 F1), whereas identifying unverifiable claims proves more challenging (.27 F1). The dataset is available at http://www.ims.uni-stuttgart.de/data/bioclaim.
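
As a rough illustration of what estimating retrieval success purely from the claim text could look like, here is a hedged Python sketch using a TF-IDF bag-of-words classifier. The model choice and the toy data are assumptions and do not reproduce the paper's setup or its .82 F1 result.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    claims = [
        "Aspirin reduces the risk of heart attack.",
        "Vitamin C does not cause kidney stones.",
        "Drug X cures disease Y.",
        "Smoking causes lung cancer.",
    ]
    evidence_found = [1, 0, 0, 1]   # toy labels: 1 = evidence retrieval succeeded

    # Predict retrieval success from surface features of the claim alone.
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(claims, evidence_found)
    print(clf.predict(["Coffee consumption causes hypertension."]))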


Detection Defenses: An Empty Promise against Adversarial Patch Attacks on Optical Flow

arXiv.org Artificial Intelligence

Adversarial patches undermine the reliability of optical flow predictions when placed in arbitrary scene locations. They therefore pose a realistic threat to real-world motion detection and its downstream applications. Potential remedies are defense strategies that detect and remove adversarial patches, but their influence on the underlying motion prediction has not been investigated. In this paper, we thoroughly examine the currently available detect-and-remove defenses ILP and LGS for a wide selection of state-of-the-art optical flow methods, and illuminate their side effects on the quality and robustness of the final flow predictions. In particular, we implement defense-aware attacks to investigate whether current defenses can withstand attacks that take the defense mechanism into account. Our experiments yield two surprising results: detect-and-remove defenses not only lower the optical flow quality on benign scenes, but in doing so also harm the robustness under patch attacks for all tested optical flow methods except FlowNetC. As currently employed detect-and-remove defenses fail to deliver the promised adversarial robustness for optical flow, they evoke a false sense of security. The code is available at https://github.com/cv-stuttgart/DetectionDefenses.
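
To make the detect-and-remove idea concrete, here is a minimal Python sketch that flags image windows with anomalously strong gradients (adversarial patches tend to be high-frequency) and neutralizes them before flow estimation. The threshold, window size, and removal step are illustrative assumptions; the actual ILP and LGS defenses are more involved.

    import numpy as np

    def detect_patch_mask(img, win=8, thresh=4.0):
        # Flag windows whose mean gradient magnitude is far above the image mean.
        gy, gx = np.gradient(img.astype(float))
        mag = np.hypot(gx, gy)
        mask = np.zeros(img.shape, dtype=bool)
        for y in range(0, img.shape[0] - win + 1, win):
            for x in range(0, img.shape[1] - win + 1, win):
                if mag[y:y + win, x:x + win].mean() > thresh * mag.mean():
                    mask[y:y + win, x:x + win] = True
        return mask

    def remove_patch(img, mask):
        # Neutralize flagged pixels; real defenses inpaint or smooth instead.
        out = img.astype(float).copy()
        if mask.any():
            out[mask] = out[~mask].mean()
        return out

    rng = np.random.default_rng(0)
    img = np.full((64, 64), 0.5)
    img[16:32, 16:32] = rng.choice([0.0, 1.0], size=(16, 16))  # noisy "patch"
    mask = detect_patch_mask(img)
    clean = remove_patch(img, mask)    # `clean` is what the flow network sees
    print(f"flagged {mask.mean():.1%} of pixels")

The paper's finding is precisely that this kind of removal changes the input statistics the flow network sees, which can degrade both benign quality and robustness.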